
    Optoelectronic Reservoir Computing

    Reservoir computing is a recently introduced, highly efficient bio-inspired approach for processing time-dependent data. The basic scheme of reservoir computing consists of a nonlinear recurrent dynamical system coupled to a single input layer and a single output layer. Within these constraints, many implementations are possible. Here we report an optoelectronic implementation of reservoir computing based on a recently proposed architecture consisting of a single nonlinear node and a delay line. Our implementation is sufficiently fast for real-time information processing. We illustrate its performance on tasks of practical importance such as nonlinear channel equalization and speech recognition, and obtain results comparable to state-of-the-art digital implementations.
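    The single-node-plus-delay-line architecture is usually simulated by time-multiplexing: one nonlinearity is driven sequentially through N "virtual" nodes, each masked copy of the input combined with the node's state one delay ago. A minimal, hypothetical NumPy sketch of this idea (the mask, `eta`, and `gamma` values are illustrative, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N = 50        # number of virtual nodes along the delay line
    T = 200       # input time steps
    mask = rng.uniform(-1, 1, N)   # random input mask, one weight per virtual node

    def delay_reservoir(u, eta=0.5, gamma=0.05):
        """Single nonlinear node with delayed feedback (time-multiplexed).

        Each scalar input u[t] is multiplied by the mask and fed sequentially
        through one tanh nonlinearity; the state one full delay ago provides
        the feedback, so the N virtual nodes together act as a reservoir.
        """
        x = np.zeros(N)                  # virtual-node states
        states = np.zeros((len(u), N))
        for t, ut in enumerate(u):
            prev = x.copy()              # state from one delay period earlier
            for i in range(N):
                x[i] = np.tanh(eta * prev[i] + gamma * mask[i] * ut)
            states[t] = x
        return states

    u = np.sin(np.linspace(0, 8 * np.pi, T))   # toy input signal
    X = delay_reservoir(u)
    print(X.shape)   # (200, 50)
    ```

    A linear readout trained on `X` (e.g. by ridge regression) would then complete the reservoir-computing pipeline; in the optoelectronic setup the tanh node is replaced by the physical nonlinearity.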

    Echo State Property of Deep Reservoir Computing Networks

    In recent years, the Reservoir Computing (RC) framework has emerged as a state-of-the-art approach for efficient learning in temporal domains. Recently, within the RC context, deep Echo State Network (ESN) models have been proposed. Being composed of a stack of multiple nonlinear reservoir layers, deep ESNs can potentially exploit the advantages of a hierarchical temporal feature representation at different levels of abstraction, while preserving the training efficiency typical of the RC methodology. In this paper, we generalize to the case of deep architectures the fundamental RC conditions related to the Echo State Property (ESP), based on the study of stability and contractivity of the resulting dynamical system. Besides providing a necessary condition and a sufficient condition for the ESP of layered RC networks, the results of our analysis also provide insights into the nature of the state dynamics in hierarchically organized recurrent models. In particular, we find that adding layers to a deep reservoir architecture can only drive the regime of the network's dynamics towards (equally or) less stable behaviors. Moreover, our investigation shows an intrinsic differentiation of temporal dynamics across the levels of a deep recurrent architecture, with higher layers in the stack characterized by less contractive dynamics. These theoretical insights are further supported by experimental results showing that layering progressively increases the short-term memory capacity of the recurrent models.
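    The layered architecture the abstract describes can be sketched concretely: each reservoir is a random recurrent matrix rescaled to a target spectral radius (rho < 1 being the usual necessary condition for the ESP in standard ESNs), with layer 0 driven by the external input and each higher layer driven by the state of the layer below. A minimal, hypothetical NumPy sketch (sizes and scalings are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def reservoir_matrix(n, rho):
        """Random recurrent matrix rescaled to spectral radius rho."""
        W = rng.standard_normal((n, n))
        return W * (rho / max(abs(np.linalg.eigvals(W))))

    def deep_esn_states(u, layers=3, n=30, rho=0.9, win_scale=0.5):
        """Stack of untrained reservoirs: layer 0 reads the external
        (scalar) input, layer l > 0 reads the state of layer l-1."""
        Ws = [reservoir_matrix(n, rho) for _ in range(layers)]
        Wins = [rng.uniform(-win_scale, win_scale, (n, 1 if l == 0 else n))
                for l in range(layers)]
        x = [np.zeros(n) for _ in range(layers)]
        out = []
        for ut in u:
            inp = np.array([ut])
            for l in range(layers):
                x[l] = np.tanh(Wins[l] @ inp + Ws[l] @ x[l])
                inp = x[l]           # feed the state upward to the next layer
            out.append(np.concatenate(x))   # global state = all layers stacked
        return np.array(out)

    u = rng.uniform(-1, 1, 100)
    X = deep_esn_states(u)
    print(X.shape)   # (100, 90)
    ```

    Note this sketch only fixes each layer's spectral radius; the paper's actual ESP conditions for the full layered system are stated in terms of the stability and contractivity of the composed dynamics, not layer by layer.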

    Combining memory and non-linearity in echo state networks

    Echo State Networks (ESNs) represent a successful methodology for efficient modeling with Recurrent Neural Networks. The untrained recurrent dynamics in ESNs apparently must satisfy a trade-off between two desirable features: implementing a long memory over past inputs and the ability to model nonlinear dynamics. In this paper, we analyze this memory/non-linearity trade-off from the perspective of recurrent model design. In particular, we propose two variants of the standard ESN model, aiming at combining linear and nonlinear dynamics both in the architectural setup of the recurrent system and at the level of the recurrent units' activation functions. The proposed models are experimentally assessed on ad hoc tasks as well as on standard benchmarks in the area of Reservoir Computing. Results show that the introduced ESN variants can strike the proper trade-off between memory and non-linearity requirements, while improving on the performance of standard ESNs. Moreover, analyzing the degree of non-linearity employed in the reservoir system can provide useful insights into the characterization of the learning task at hand.
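    One of the two directions the abstract mentions, mixing dynamics at the level of the units' activation functions, can be illustrated by a reservoir in which a fraction of the units stay linear (favoring long memory) while the rest apply tanh (providing nonlinear processing). This is a hypothetical sketch of that idea, not the paper's exact model; all parameters are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def mixed_esn_states(u, n=100, frac_linear=0.5, rho=0.9):
        """Reservoir mixing linear units (long memory) with tanh units
        (nonlinear dynamics) in a single state vector."""
        W = rng.standard_normal((n, n))
        W *= rho / max(abs(np.linalg.eigvals(W)))    # set spectral radius
        w_in = rng.uniform(-0.5, 0.5, n)
        linear = np.arange(n) < int(frac_linear * n)  # first half: identity
        x = np.zeros(n)
        states = []
        for ut in u:
            pre = w_in * ut + W @ x
            x = np.where(linear, pre, np.tanh(pre))   # per-unit activation
            states.append(x.copy())
        return np.array(states)

    X = mixed_esn_states(rng.uniform(-1, 1, 50))
    print(X.shape)   # (50, 100)
    ```

    Varying `frac_linear` moves the reservoir along the memory/non-linearity axis: an all-linear reservoir maximizes linear memory capacity, while an all-tanh one favors nonlinear computation.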

    Reservoir Computing: Quo Vadis?

    Reservoir Computing (RC) is an umbrella term for adaptive computational paradigms that rely on an excitable dynamical system, also called the reservoir. These paradigms have been shown to be particularly promising for temporal signal processing, and RC has also been explored as a potential candidate for emerging nanoscale architectures. In this article, we reflect on the current state of RC and muse about its future. In particular, we propose a set of open problems that we think need to be addressed in order to make RC more mainstream.